Results 1 - 20 of 99
1.
Neuroscience ; 543: 101-107, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38432549

ABSTRACT

In natural viewing conditions, the brain can optimally integrate retinal and extraretinal signals to maintain a stable visual perception. These mechanisms, however, may fail in circumstances where extraction of a motion signal is less viable, such as in impoverished visual scenes. This can result in a phenomenon known as autokinesis, in which one may experience apparent motion of a small visual stimulus in an otherwise completely dark environment. In this study, we examined the effect of autokinesis on visual perception of motion in human observers. We used a novel method with optical tracking in which the visual motion was reported manually by the observer. Experimental results show that at lower speeds of motion, the perceived direction of motion was more aligned with the effect of autokinesis, whereas in the light or at higher speeds in the dark, it was more aligned with the actual direction of motion. These findings have important implications for understanding how the stability of visual representation in the brain can affect accurate perception of motion signals.


Subjects
Motion Perception , Humans , Visual Perception , Ocular Vision , Psychomotor Performance , Retina
2.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopically guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and is trained using an innovative double-backward gradient-driven loss function. We compare the most popular learning-based pose regression methods in the literature and use the well-established CMAES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMAES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMAES SRs without ProST registration are 28.5% and 36.0% in simulation and on real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
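The capture-range argument can be illustrated with a deliberately simple 1D sketch (not the authors' code; the loss functions, learning rate, and initialization are invented): gradient descent on a rippled intensity-style similarity stalls in a local minimum, while a convex surrogate of the kind such a network is trained to approximate converges from the same far initialization.

```python
import math

# Toy 1D "pose": the true offset is x = 0.
def intensity_loss(x):
    # Hand-crafted intensity similarity: the ripples create local minima,
    # which is what limits the capture range.
    return x * x / 50.0 + math.sin(x) ** 2

def convex_loss(x):
    # Convex surrogate, standing in for the learned similarity (assumption).
    return x * x / 50.0

def gradient_descent(loss, x0, lr=0.1, steps=2000, eps=1e-4):
    x = x0
    for _ in range(steps):
        g = (loss(x + eps) - loss(x - eps)) / (2.0 * eps)  # numeric gradient
        x -= lr * g
    return x

far_init = 8.0                                            # far from optimum
x_intensity = gradient_descent(intensity_loss, far_init)  # stalls in a ripple
x_convex = gradient_descent(convex_loss, far_init)        # converges toward 0
```

The same descent rule succeeds or fails purely because of the loss landscape, which is the intuition behind replacing a hand-crafted similarity with a learned convex one.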


Subjects
Three-Dimensional Imaging , Pelvis , Three-Dimensional Imaging/methods , Fluoroscopy/methods , Software , Algorithms
3.
Crit Rev Biomed Eng ; 51(6): 29-50, 2023.
Article in English | MEDLINE | ID: mdl-37824333

ABSTRACT

The handheld drill has been used as a conventional surgical tool for centuries. Alongside the recent successes of surgical robots, the development of new and enhanced medical drills has improved surgeons' capabilities without the high cost and time-consuming setup that plague medical robot systems. This work provides an overview of enhanced handheld surgical drill research, focusing on systems that include some form of image guidance and do not require additional hardware to physically support or guide drilling. Drilling research is reviewed by main contribution, divided into audio-, visual-, or hardware-enhanced drills. A vision for future work to enhance handheld drilling systems is also discussed.


Subjects
Surgical Equipment
4.
Article in English | MEDLINE | ID: mdl-37555198

ABSTRACT

Magnetic Resonance Imaging (MRI) is a medical imaging modality that allows for the evaluation of soft-tissue diseases and the assessment of bone quality. Preoperative MRI volumes are used by surgeons to identify diseased bone, segment lesions, and generate surgical plans prior to surgery. Nevertheless, conventional intraoperative imaging modalities such as fluoroscopy are less sensitive in detecting potential lesions. In this work, we propose a 2D/3D registration pipeline that registers preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example and perform 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is then used to create overlays of preoperative MRI annotations and planning data, providing intraoperative visual guidance to surgeons. Our results suggest that the proposed pipeline achieves a reasonable transformation between MRI and digitally reconstructed fluoroscopic images for intraoperative visualization applications.

5.
ArXiv ; 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37576124

ABSTRACT

Longitudinal tracking of skin lesions - finding correspondences and changes in morphology and texture - is beneficial to the early detection of melanoma. However, it has not been well investigated in the context of full-body imaging. We propose a novel framework combining geometric and texture information to localize skin lesion correspondences from a source scan to a target scan in total body photography (TBP). Body landmarks, or sparse correspondences, are first created on the source and target 3D textured meshes. Every vertex on each mesh is then mapped to a feature vector characterizing the geodesic distances to the landmarks on that mesh. Then, for each lesion of interest (LOI) on the source, its corresponding location on the target is first coarsely estimated using the geometric information encoded in the feature vectors and then refined using the texture information. We evaluated the framework quantitatively on both a public and a private dataset, for which our success rates (at the 10 mm criterion) are comparable to the only reported longitudinal study. As full-body 3D capture becomes more prevalent and of higher quality, we expect the proposed method to constitute a valuable step in the longitudinal tracking of skin lesions.
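The geodesic-feature matching step can be sketched in a few lines (illustrative only: Euclidean distances between 2D points stand in for geodesic distances on a textured mesh, and the rigid warp, landmark layout, and noise level are invented). The key property is that distances to landmarks are preserved across poses, so nearest-neighbour search in feature space recovers the coarse correspondence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the source/target meshes: 2D points with three shared
# landmarks. Euclidean distance replaces geodesic distance purely for
# illustration; a real mesh would use a geodesic solver (e.g. Dijkstra
# over edges or the heat method).
landmarks_src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
verts_src = rng.uniform(0.0, 1.0, size=(200, 2))

def warp(p):
    # Target scan: same body in a different pose. A rigid motion preserves
    # the distances, which is what makes the features comparable.
    theta = 0.3
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return p @ rot.T + np.array([0.2, -0.1])

landmarks_tgt = warp(landmarks_src)
verts_tgt = warp(verts_src) + rng.normal(0.0, 0.003, size=verts_src.shape)

def features(verts, landmarks):
    # Each vertex -> vector of distances to every landmark on its own mesh.
    return np.linalg.norm(verts[:, None, :] - landmarks[None, :, :], axis=2)

f_src = features(verts_src, landmarks_src)
f_tgt = features(verts_tgt, landmarks_tgt)

# Coarse localization of a lesion of interest: nearest neighbour in feature
# space (the paper then refines this estimate with texture information).
lesion_idx = 17
match_idx = int(np.argmin(np.linalg.norm(f_tgt - f_src[lesion_idx], axis=1)))
```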

6.
Int J Comput Assist Radiol Surg ; 18(7): 1329-1334, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37162733

ABSTRACT

PURPOSE: The use of robotic continuum manipulators has been proposed to facilitate less-invasive orthopedic surgical procedures. While tools and strategies have been developed, critical challenges such as system control and intra-operative guidance remain under-addressed. Simulation tools can help solve these challenges, but several gaps limit their utility for orthopedic surgical systems, particularly those with continuum manipulators. Herein, a simulation platform that addresses these gaps is presented as a tool to better understand and solve challenges for minimally invasive orthopedic procedures. METHODS: An open-source surgical simulation software package was developed in which a continuum manipulator can interact with any volume model, such as drilling bone volumes segmented from a 3D computed tomography (CT) image. Paired simulated X-ray images of the scene can also be generated. Compared to previous work, tool-anatomy interactions use a physics-based approach, which leads to more stable behavior and wider procedure applicability. A new method for representing low-level volumetric drilling behavior is also introduced to capture material variability within bone as well as patient-specific properties from a CT. RESULTS: Interactions demonstrated between a continuum manipulator and phantom bone were reproduced between a simulated manipulator and volumetric obstacle models. High-level material- and tool-driven behavior was shown to emerge directly from the improved low-level interactions, rather than requiring manual programming. CONCLUSION: This platform is a promising tool for developing and investigating control algorithms for tasks such as curved drilling. The generation of simulated X-ray images that correspond to the scene is useful for developing and validating image guidance models. The improvements to volumetric drilling offer users the ability to better tune behavior for specific tools and procedures and enable research to improve surgical simulation model fidelity. This platform will be used to develop and test control algorithms for image-guided curved drilling procedures in the femur.


Subjects
Orthopedic Procedures , Orthopedics , Robotics , Humans , Computer Simulation , Orthopedic Procedures/methods , Algorithms
7.
Int J Comput Assist Radiol Surg ; 18(7): 1201-1208, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37213057

ABSTRACT

PURPOSE: Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS: Our approach reconstructs an appropriate trajectory in a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network detects the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS: We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION: An expert user study with an anthropomorphic phantom demonstrates that our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
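Reconstructing a 3D trajectory from a two-image sequence rests on standard two-view triangulation. A minimal sketch using the direct linear transform (DLT), with invented projection matrices and a single landmark rather than the authors' full pipeline:

```python
import numpy as np

# Two calibrated views as 3x4 projection matrices (invented geometry).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0]])        # second view: 90 deg rotation about y
t = np.array([[0.0], [0.0], [2.0]])
P2 = np.hstack([R, t])

X_true = np.array([0.3, -0.2, 4.0, 1.0])  # homogeneous 3D landmark

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

x1, x2 = project(P1, X_true), project(P2, X_true)

def triangulate(P1, x1, P2, x2):
    # Each observed 2D point contributes two linear constraints on X (DLT).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                          # null vector = homogeneous solution
    return X[:3] / X[3]

X_hat = triangulate(P1, x1, P2, x2)
```

Triangulating two landmarks (e.g. a K-wire tip and tail) in the same way yields a 3D line that can be compared against the reconstructed corridor.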


Subjects
Bone Fractures , Three-Dimensional Imaging , Humans , X-Rays , Three-Dimensional Imaging/methods , Fluoroscopy/methods , X-Ray Computed Tomography/methods , Bone Fractures/diagnostic imaging , Bone Fractures/surgery , Fracture Fixation , Internal Fracture Fixation/methods
8.
IEEE Trans Med Robot Bionics ; 5(1): 18-29, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37213937

ABSTRACT

Minimally invasive Osteoporotic Hip Augmentation (OHA) by injecting bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can significantly benefit from a computer-assisted planning and execution system to optimize the pattern of cement injection. We present a novel robotic system for the execution of OHA that consists of a 6-DOF robotic arm and an integrated drilling and injection component. The minimally invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducials attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for entry and target points, respectively, and an orientation error of 2.30° are calculated. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm is reported between injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.

9.
Article in English | MEDLINE | ID: mdl-37021885

ABSTRACT

The use of Augmented Reality (AR) for navigation purposes has proven beneficial in assisting physicians during surgical procedures. These applications commonly require knowing the pose of surgical tools and patients to provide the visual information that surgeons use during the task. Existing medical-grade tracking systems use infrared cameras placed inside the Operating Room (OR) to identify retro-reflective markers attached to objects of interest and compute their pose. Some commercially available AR Head-Mounted Displays (HMDs) use similar cameras for self-localization, hand tracking, and estimating objects' depth. This work presents a framework that uses the built-in cameras of AR HMDs to enable accurate tracking of retro-reflective markers without the need to integrate any additional electronics into the HMD. The proposed framework can simultaneously track multiple tools without prior knowledge of their geometry and requires only establishing a local network between the headset and a workstation. Our results show that the tracking and detection of the markers can be achieved with an accuracy of 0.09 ± 0.06 mm in lateral translation, 0.42 ± 0.32 mm in longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis. Furthermore, to showcase the relevance of the proposed framework, we evaluate the system's performance in the context of surgical procedures. This use case was designed to replicate the scenario of K-wire insertions in orthopedic procedures. For evaluation, seven surgeons were provided with visual navigation and asked to perform 24 injections using the proposed framework. A second study with ten participants served to investigate the capabilities of the framework in more general scenarios. Results from these studies provided comparable accuracy to those reported in the literature for AR-based navigation procedures.
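Once a marker constellation has been established, recovering the tool's rigid pose each frame reduces to the classic orthogonal Procrustes (Kabsch) step. A generic sketch with an invented four-marker geometry, not the authors' implementation:

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    # Kabsch: least-squares rotation + translation mapping model to observed.
    cm, co = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - cm).T @ (observed_pts - co)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = co - R @ cm
    return R, t

# Asymmetric 4-marker constellation (mm), as on a typical rigid body.
model = np.array([[0, 0, 0], [60, 0, 0], [0, 40, 0], [25, 25, 30]], float)

theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, -5.0, 120.0])
observed = model @ R_true.T + t_true    # detected 3D marker positions

R_est, t_est = rigid_pose(model, observed)
```

With noiseless detections the recovered pose is exact; in practice the residual of this fit is also a useful per-frame tracking-quality signal.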

10.
Int J Comput Assist Radiol Surg ; 18(6): 1017-1024, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37079247

ABSTRACT

PURPOSE: Image-guided navigation and surgical robotics are the next frontiers of minimally invasive surgery. Assuring safety in high-stakes clinical environments is critical for their deployment. 2D/3D registration is an essential, enabling algorithm for most of these systems, as it provides spatial alignment of preoperative data with intraoperative images. While these algorithms have been studied widely, there is a need for verification methods that enable human stakeholders to assess and either approve or reject registration results to ensure safe operation. METHODS: To address the verification problem from the perspective of human perception, we develop novel visualization paradigms and use a sampling method based on an approximate posterior distribution to simulate registration offsets. We then conduct a user study with 22 participants to investigate how different visualization paradigms (Neutral, Attention-Guiding, Correspondence-Suggesting) affect human performance in evaluating simulated 2D/3D registration results using 12 pelvic fluoroscopy images. RESULTS: All three visualization paradigms allow users to perform better than random guessing in differentiating between offsets of varying magnitude. The novel paradigms show better performance than the neutral paradigm when using an absolute threshold to differentiate acceptable and unacceptable registrations (highest accuracy: Correspondence-Suggesting (65.1%); highest F1 score: Attention-Guiding (65.7%)), as well as when using a paradigm-specific threshold for the same discrimination (highest accuracy: Attention-Guiding (70.4%); highest F1 score: Correspondence-Suggesting (65.0%)). CONCLUSION: This study demonstrates that visualization paradigms do affect human-based assessment of 2D/3D registration errors. However, further exploration is needed to understand this effect better and to develop more effective methods for assuring accuracy. This research serves as a crucial step toward enhanced surgical autonomy and safety assurance in technology-assisted image-guided surgery.


Subjects
Three-Dimensional Imaging , Computer-Assisted Surgery , Humans , Three-Dimensional Imaging/methods , Computer-Assisted Surgery/methods , Fluoroscopy , Pelvis , Technology , Algorithms
12.
IEEE Sens J ; 23(12): 12915-12929, 2023 Jun 15.
Article in English | MEDLINE | ID: mdl-38558829

ABSTRACT

Continuum dexterous manipulators (CDMs) are suitable for performing tasks in constrained environments due to their high dexterity and maneuverability. Despite the inherent advantages of CDMs in minimally invasive surgery, real-time control of a CDM's shape during nonconstant-curvature bending remains challenging. This study presents a novel approach for the design and fabrication of a large-deflection fiber Bragg grating (FBG) shape sensor embedded within the lumens inside the walls of a CDM with a large instrument channel. The shape sensor consisted of two fibers, each with three FBG nodes. A shape-sensing model was introduced to reconstruct the centerline of the CDM based on FBG wavelengths. Different experiments, including shape sensor tests and CDM shape reconstruction tests, were conducted to assess the overall shape-sensing accuracy. The FBG sensor evaluation results revealed a linear curvature-wavelength relationship, with large curvature detection of 0.045 mm⁻¹ and a high wavelength shift of up to 5.50 nm at a 90° bending angle in both bending directions. The CDM's shape reconstruction experiments in a free environment demonstrated a shape-tracking accuracy of 0.216 ± 0.126 mm for positive/negative deflections. Also, the CDM shape reconstruction error for three cases of bending with obstacles was 0.436 ± 0.370 mm for the proximal case, 0.485 ± 0.418 mm for the middle case, and 0.312 ± 0.261 mm for the distal case. This study indicates the adequate performance of the FBG sensor and the effectiveness of the model for tracking the shape of a large-deflection CDM with nonconstant-curvature bending for minimally invasive orthopedic applications.
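Centerline reconstruction of this kind amounts to mapping wavelength shifts to curvature through the linear model and integrating curvature along arc length. A planar sketch with invented calibration constants (the paper's actual model, constants, and 3D handling differ):

```python
import numpy as np

# Illustrative linear curvature-wavelength gain: assumes curvature scales
# linearly with wavelength shift (constant invented for this sketch).
K = 0.045 / 5.50          # curvature (1/mm) per nm of wavelength shift

def centerline_from_shifts(shifts_nm, seg_len_mm):
    """Integrate per-node curvature along arc length into a planar centerline."""
    kappa = K * np.asarray(shifts_nm)   # curvature at each FBG node, 1/mm
    pts = [np.zeros(2)]
    theta = 0.0
    for k in kappa:
        theta += k * seg_len_mm         # bending angle accumulated per segment
        pts.append(pts[-1] + seg_len_mm * np.array([np.sin(theta), np.cos(theta)]))
    return np.array(pts)

# A constant shift yields constant curvature, i.e. a circular arc.
pts = centerline_from_shifts([2.0, 2.0, 2.0], seg_len_mm=10.0)
```

Zero shift reproduces a straight centerline, and each reconstructed segment preserves its arc length, which are the basic sanity checks for such an integration scheme.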

13.
Article in English | MEDLINE | ID: mdl-38566770

ABSTRACT

Accurate depth estimation poses a significant challenge in egocentric Augmented Reality (AR), particularly for precision-dependent tasks in the medical field, such as needle or tool insertions during percutaneous procedures. Augmented Mirrors (AMs) provide a unique solution to this problem by offering additional non-egocentric viewpoints that enhance spatial understanding of an AR scene. Despite the perceptual advantages of AMs, their practical utility has yet to be thoroughly tested. In this work, we present results from a pilot study involving five participants tasked with simulating epidural injection procedures in an AR environment, both with and without the aid of an AM. Our findings indicate that using an AM reduces mental effort while improving alignment accuracy. These results highlight the potential of AMs as a powerful tool for AR-enabled medical procedures, setting the stage for future studies involving medical professionals.

14.
Proc IEEE Sens ; 20232023.
Article in English | MEDLINE | ID: mdl-38577480

ABSTRACT

We propose a novel, inexpensive embedded capacitive sensor (ECS) for sensing the shape of Continuum Dexterous Manipulators (CDMs). Our approach addresses limitations associated with the prevalent fiber Bragg grating (FBG) sensors, such as temperature sensitivity and high production costs. ECSs are calibrated using a vision-based system: a recurrent neural network combines kinematic data collected from the vision-based system with the uncalibrated ECS readings. We evaluated performance on a 3D-printed prototype of a cable-driven CDM with multiple markers along its length. Using data from three ECSs along the length of the CDM, we computed the angle and position of its tip with respect to its base and compared the results to the measurements of the vision-based system. We found a 6.6% tip position error normalized to the length of the CDM. This work shows the early feasibility of using ECSs for shape sensing and feedback control of CDMs and discusses potential future improvements.

15.
Med Image Comput Comput Assist Interv ; 14228: 133-143, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38617200

ABSTRACT

Surgical phase recognition (SPR) is a crucial element in the digital transformation of the modern operating theater. While SPR based on video sources is well established, the incorporation of interventional X-ray sequences has not yet been explored. This paper presents Pelphix, a first approach to SPR for X-ray-guided percutaneous pelvic fracture fixation, which models the procedure at four levels of granularity - corridor, activity, view, and frame value - simulating the pelvic fracture fixation workflow as a Markov process to provide fully annotated training data. Using added supervision from detection of bony corridors, tools, and anatomy, we learn image representations that are fed into a transformer model to regress surgical phases at the four granularity levels. Our approach demonstrates the feasibility of X-ray-based SPR, achieving an average accuracy of 99.2% on simulated sequences and 71.7% on cadaver sequences across all granularity levels, with up to 84% accuracy for the target corridor on real data. This work constitutes the first step toward SPR for the X-ray domain, establishing an approach to categorizing phases in X-ray-guided surgery, simulating realistic image sequences to enable machine learning model development, and demonstrating that this approach is feasible for the analysis of real procedures. As X-ray-based SPR continues to mature, it will benefit procedures in orthopedic surgery, angiography, and interventional radiology by equipping intelligent surgical systems with situational awareness in the operating room.
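Simulating a surgical workflow as a Markov process can be sketched with a toy phase graph (states and transition probabilities invented for illustration; the actual Pelphix model is far richer). Each sampled sequence carries its own phase labels, which is what makes fully annotated training data essentially free.

```python
import random

# Toy phase graph for a wire-then-screw workflow (invented for illustration).
PHASES = ["position_wire", "acquire_view", "adjust", "insert_screw", "done"]
TRANSITIONS = {
    "position_wire": [("acquire_view", 1.0)],
    "acquire_view":  [("adjust", 0.6), ("insert_screw", 0.4)],
    "adjust":        [("acquire_view", 1.0)],
    "insert_screw":  [("done", 1.0)],
    "done":          [("done", 1.0)],
}

def sample_workflow(seed=0, max_len=50):
    # Walk the Markov chain, recording the phase label at every step.
    rng = random.Random(seed)
    seq = ["position_wire"]
    while seq[-1] != "done" and len(seq) < max_len:
        states, probs = zip(*TRANSITIONS[seq[-1]])
        seq.append(rng.choices(states, weights=probs, k=1)[0])
    return seq

seq = sample_workflow(seed=3)
```

In the full pipeline, each sampled state would additionally drive the rendering of a matching simulated X-ray frame, yielding image-label pairs.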

16.
Article in English | MEDLINE | ID: mdl-38487569

ABSTRACT

The integration of navigation capabilities into the operating room has enabled surgeons to take on more precise procedures guided by a pre-operative plan. Traditionally, navigation information based on this plan is presented on monitors in the surgical theater, but these monitors force the surgeon to frequently look away from the surgical area. Alternative technologies, such as augmented reality, have enabled surgeons to visualize navigation information in situ. However, burdening the visual field with additional information can be distracting. In this work, we propose integrating haptic feedback into a surgical tool handle to enable surgical guidance capabilities. This approach reduces the amount of visual information, freeing surgeons to maintain visual attention on the patient and the surgical site. To investigate the feasibility of this guidance paradigm, we conducted a pilot study with six subjects. Participants traced paths, pinpointed locations, and matched alignments with a mock surgical tool featuring a novel haptic handle. We collected quantitative data, tracking users' accuracy and time to completion, as well as subjective cognitive load. Our results show that haptic feedback can guide participants using a tool to sub-millimeter and sub-degree accuracy with little training. Participants were able to match a location with an average error of 0.82 mm, desired pivot alignments with an average error of 0.83°, and desired rotations to within 0.46°.

17.
Nat Mach Intell ; 5(3): 294-308, 2023 Mar.
Article in English | MEDLINE | ID: mdl-38523605

ABSTRACT

Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
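The simulated X-ray images underpinning this kind of synthesis are digitally reconstructed radiographs (DRRs). A minimal parallel-ray sketch of the idea, with a synthetic volume (real DRR renderers trace cone-beam rays through a projective camera model and a CT volume):

```python
import numpy as np

# CT-like volume: a dense cube standing in for bone, with a denser core.
vol = np.zeros((64, 64, 64))
vol[20:44, 20:44, 20:44] = 1.0
vol[28:36, 28:36, 28:36] = 2.0

spacing = 0.5  # voxel size along the ray direction, in mm (assumed)

# Parallel rays along axis 0: sum attenuation along each ray, then apply
# Beer-Lambert to get transmitted intensity per detector pixel.
drr = np.exp(-(vol.sum(axis=0) * spacing))
```

Because the full imaging chain is simulated, every pixel's provenance (which anatomy each ray traversed) is known, which is what makes dense ground-truth labels for training essentially free.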

18.
Article in English | MEDLINE | ID: mdl-38179232

ABSTRACT

Osteonecrosis of the Femoral Head (ONFH) is a progressive disease characterized by the death of bone cells due to the loss of blood supply. Early detection and treatment of this disease are vital in avoiding Total Hip Replacement. While early stages of ONFH can be diagnosed using Magnetic Resonance Imaging (MRI), commonly used intra-operative imaging modalities such as fluoroscopy frequently fail to depict the lesion, increasing the difficulty of intra-operative localization of osteonecrosis. This work introduces a novel framework that enables the localization of necrotic lesions in Computed Tomography (CT) as a step toward localizing and visualizing necrotic lesions in intra-operative images. The proposed framework uses Deep Learning algorithms for automatic segmentation of the femur, pelvis, and necrotic lesions in MRI. An additional step performs semi-automatic segmentation of these anatomies, excluding the necrotic lesions, in CT. A final step performs pairwise registration of the corresponding anatomies, allowing for the localization and visualization of the necrosis in CT. To investigate the feasibility of integrating the proposed framework into the surgical workflow, we conducted experiments on MRIs and CTs containing early-stage ONFH. Our results indicate that the proposed framework can segment the anatomical structures of interest and accurately register the femurs and pelvises of the corresponding volumes, allowing for the visualization and localization of ONFH in CT and generated X-rays, which could enable intra-operative visualization of necrotic lesions for surgical procedures such as core decompression of the femur.

19.
IEEE Trans Robot ; 38(2): 1213-1229, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35633946

ABSTRACT

This article presents a dexterous robotic system for autonomous debridement of osteolytic bone lesions in confined spaces. The proposed system is distinguished from state-of-the-art orthopedic systems because it combines a rigid-link robot with a continuum manipulator (CM) that enhances reach in difficult-to-access spaces often encountered in surgery. The CM is equipped with flexible debriding instruments and fiber Bragg grating sensors. The surgeon plans on the patient's preoperative computed tomography and the robotic system performs the task autonomously under the surgeon's supervision. An optimization-based controller generates control commands on the fly to execute the task while satisfying physical and safety constraints. The system design and controller are discussed, and extensive simulation, phantom, and human cadaver experiments are carried out to evaluate the performance, workspace, and dexterity in confined spaces. The mean and standard deviation of target placement error are 0.5 mm and 0.18 mm, and the robotic system covers 91% of the workspace behind an acetabular implant in treatment of hip osteolysis, compared to the 54% achieved by conventional rigid tools.

20.
IEEE Trans Med Robot Bionics ; 4(4): 901-909, 2022 Nov.
Article in English | MEDLINE | ID: mdl-37790985

ABSTRACT

We present an autonomous robotic spine needle injection system using fluoroscopic image-based navigation. Our system includes patient-specific planning, intra-operative image-based 2D/3D registration and navigation, and automatic robot-guided needle injection. We performed intensive simulation studies to validate the registration accuracy, achieving a mean spine vertebra registration error of 0.8 ± 0.3 mm and 0.9 ± 0.7° and a mean injection device registration error of 0.2 ± 0.6 mm and 1.2 ± 1.3°, in translation and rotation, respectively. We then conducted cadaveric studies comparing our system to an experienced clinician's free-hand injections, achieving a mean needle tip translational error of 5.1 ± 2.4 mm and a needle orientation error of 3.6 ± 1.9° for robotic injections, compared to 7.6 ± 2.8 mm and 9.9 ± 4.7° for the clinician's free-hand injections. During injections, all needle tips were placed within the defined safety zones for this application. The results suggest the feasibility of using our image-guided robotic injection system for spinal orthopedic applications.
